Results 1 - 2 of 2
1.
Disabil Rehabil Assist Technol ; 16(3): 280-288, 2021 04.
Article in English | MEDLINE | ID: mdl-31694420

ABSTRACT

BACKGROUND: Deep learning systems have improved device performance through more accurate object detection in a significant number of areas, for medical aids in general and for navigational aids for the visually impaired in particular. Systems addressing different needs are available, and many handle the detection of static obstacles effectively. PURPOSE: This research reviews deep learning systems used with navigational tools for the visually impaired and provides a framework to guide future research. METHODS: We compare current deep learning systems used with navigational tools for the visually impaired and compile a taxonomy of indispensable features for such systems. RESULTS: Challenges to detection are identified. Our taxonomy of improved navigational systems is sufficiently robust to be generally applied. CONCLUSION: This critical analysis is, to the best of our knowledge, the first of its kind and provides a much-needed overview of the field.

Implications for Rehabilitation: Deep learning systems can provide low-cost solutions for the visually impaired. Of these, convolutional neural networks (CNNs) and fully convolutional networks (FCNs) show great promise for the development of multifunctional technology for the visually impaired (i.e., technology less tied to a specific task). CNNs also have potential for overcoming challenges caused by moving and occluded objects. This work also highlights a need for greater emphasis on feedback to the visually impaired, which for many technologies is limited.
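The CNN-based detectors the review surveys are built from learned convolutional filters. As a minimal illustration of the underlying operation (not any specific system from the review; the scene, kernel, and sizes below are invented for demonstration), the sketch convolves a synthetic scene containing one bright "obstacle" with a hand-crafted edge filter:

```python
import numpy as np

def conv2d(image, kernel):
    """Valid-mode 2D convolution, the basic building block of a CNN layer."""
    kh, kw = kernel.shape
    h, w = image.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

# Synthetic "scene": dark background with a bright square obstacle.
scene = np.zeros((8, 8))
scene[2:6, 2:6] = 1.0

# Hand-crafted vertical-edge kernel; a trained CNN learns many such filters.
edge_kernel = np.array([[1.0, -1.0],
                        [1.0, -1.0]])

feature_map = conv2d(scene, edge_kernel)
# Strong responses of opposite sign mark the obstacle's left and right edges.
print(feature_map.min(), feature_map.max())  # -2.0 2.0
```

A real detector stacks many such layers with learned kernels and nonlinearities, but the sliding-window multiply-and-sum shown here is the operation each layer performs.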


Subjects
Deep Learning, Assistive Technology, Visually Impaired Persons/rehabilitation, Wearable Electronic Devices, Humans, Pattern Recognition, Automated
2.
Int J Med Robot ; 16(3): e2097, 2020 Jun.
Article in English | MEDLINE | ID: mdl-32091649

ABSTRACT

BACKGROUND AND AIM: Jaw surgery based on augmented reality (AR) still has limitations in navigating narrow areas. Surgeons need to avoid nerves, vessels, and teeth in their entirety, not just root canals. Inaccurate positioning of the surgical instrument may lead to positional or navigational errors and can result in cut blood vessels, nerve channels, or root canals. This research aims to decrease the positional error during surgery and thereby improve navigational accuracy. METHODOLOGY: The proposed 2D/3D system tracks the surgical instrument, consisting of the shaft and the cutting element, each part being assigned a different feature description. For 3D position estimation, the input vector is composed of image descriptors of the instrument, and the output consists of the 3D coordinates of the cutter. RESULTS: Sample results from a jawbone (maxillary and mandibular) demonstrate that the positional error is reduced. The system thus improved video alignment accuracy from 0.40 to 0.55 mm to 0.25 to 0.35 mm, and raised processing speed to 11 to 14 frames per second (fps), against the 8 to 12 fps of existing solutions. CONCLUSION: The proposed system focuses on overlaying only the area to be operated on. This AR-based study thus contributes to accurate navigation of deeper anatomical corridors through increased accuracy in positioning of surgical instruments.
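The 3D position estimation step maps image descriptors of the instrument to 3D cutter coordinates. The abstract does not specify the regressor, so the sketch below uses ordinary least squares as a stand-in; the descriptor dimensionality, sample counts, and synthetic data are all assumptions for illustration only:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical training data: descriptor vectors extracted from images of the
# instrument (16-D here, an assumption) paired with known 3D cutter coordinates.
n_samples, n_features = 200, 16
true_map = rng.normal(size=(n_features, 3))
descriptors = rng.normal(size=(n_samples, n_features))
coords_3d = descriptors @ true_map          # noiseless ground-truth positions

# Fit a linear regressor from descriptors to (x, y, z); any estimator that
# maps the input vector to 3D coordinates could be substituted here.
weights, *_ = np.linalg.lstsq(descriptors, coords_3d, rcond=None)

# Predict the cutter position for a new frame's descriptor.
new_descriptor = rng.normal(size=(1, n_features))
predicted_xyz = new_descriptor @ weights    # one (x, y, z) estimate
print(predicted_xyz.shape)  # (1, 3)
```

With noiseless synthetic data the least-squares fit recovers the generating map exactly; a real system would face descriptor noise and occlusion, which is where a more expressive regressor earns its keep.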


Subjects
Augmented Reality, Orthognathic Surgical Procedures, Surgery, Computer-Assisted, Algorithms, Humans, Imaging, Three-Dimensional